Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability

Authors

Abstract

During a research project in which we developed a machine learning (ML)-driven visualization system for non-ML experts, we reflected on interpretability research in ML, computer-supported collaborative work, and human-computer interaction. We found that while there are manifold technical approaches, these often focus on ML experts and are evaluated in decontextualized empirical studies. We hypothesized that participatory design may support the understanding of stakeholders' situated sense-making in our project, yet found the available guidance inexhaustive. Building on philosophy of technology, we formulated explanation strategies as an empirical-analytical lens explicating how explanations mediate contextual preferences concerning people's interpretations. In this paper, we contribute a report of our proof-of-concept use of this lens to analyze a co-design workshop, discuss methodological implications for research, and suggest further investigation of technological mediation theories in this space.



Similar articles

Development and Implementation of an Optimized Control Strategy for an Induction Machine in an Electric Vehicle

In the area of automotive engineering there is a tendency toward more electrification of the power train. In this work, the control of an induction machine for electric vehicle applications is investigated. Through the changing operating point of the machine, adapting the rotor magnetization current seems to be useful to increase the machine's efficiency. In the literature there are many approaches wh...


Machine Learning Model Interpretability for Precision Medicine

Interpretability of machine learning models is critical for data-driven precision medicine efforts. However, highly predictive models are generally complex and difficult to interpret. Here, using a Model-Agnostic Explanations algorithm, we show that complex models such as random forests can be made interpretable. Using the MIMIC-II dataset, we successfully predicted ICU mortality with 80% balanced ...


Software Market Configuration: A Socio-Technical Explanation

In the case presented herein, two three-dimensional rendering software products coexist without competing, though they present similar characteristics and rely on competitive technological architectures. The market configuration for these two software products thus appears largely determined by socio-technical elements, not just the technical characteristics of the software architecture. The so...


Interpretability of Machine Learning Models and Representations: an Introduction

Interpretability is often a major concern in machine learning. Although many authors agree with this statement, interpretability is often tackled with intuitive arguments, distinct (yet related) terms and heuristic quantifications. This short survey aims to clarify the concepts related to interpretability and emphasises the distinction between interpreting models and representations, as well as...


Model-Agnostic Interpretability of Machine Learning

Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. I...



Journal

Journal title: Proceedings of the ACM on Human-Computer Interaction

Year: 2022

ISSN: 2573-0142

DOI: https://doi.org/10.1145/3492858